This paper describes several improvements to a new method for signal decomposition that we recently formulated under the name of Differentiable Dictionary Search (DDS). The fundamental idea of DDS is to exploit a class of powerful deep invertible density estimators, called normalizing flows, to model the dictionary in a linear decomposition method such as NMF. This effectively creates a bijection between the space of dictionary elements and the associated probability space, allowing a differentiable search through the dictionary space, guided by the estimated densities. As the initial formulation was a proof of concept with some practical limitations, we present several steps towards making it scalable, aiming to improve both the computational complexity of the method and its signal decomposition capabilities. As a testbed for experimental evaluation, we choose the task of frame-level piano transcription, where the signal is to be decomposed into sources whose activity is attributed to individual piano notes. To highlight the impact of improved non-linear modelling of sources, we compare variants of our method to a linear overcomplete NMF baseline. Experimental results show that even in the absence of additional constraints, our models produce increasingly sparse and precise decompositions, according to two pertinent evaluation measures.
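The following is a minimal, self-contained sketch of the DDS idea described above, not the authors' exact formulation: dictionary atoms are decoded from latent codes through an invertible map, and latent codes and activations are optimized jointly by gradient descent on a reconstruction loss, with the flow's log-likelihood acting as a density-guided prior. The `TinyAffineFlow` is only a stand-in for a real, pre-trained normalizing flow.

```python
import math
import torch

class TinyAffineFlow(torch.nn.Module):
    """Elementwise affine bijection x = z * exp(s) + t; placeholder for a real flow."""
    def __init__(self, dim):
        super().__init__()
        self.s = torch.nn.Parameter(torch.zeros(dim))
        self.t = torch.nn.Parameter(torch.zeros(dim))

    def forward(self, z):                      # latent -> data space
        return z * torch.exp(self.s) + self.t

    def log_prob_of_decoded(self, z):          # log p_X(f(z)) via change of variables
        base = -0.5 * (z ** 2 + math.log(2 * math.pi)).sum(-1)  # standard-normal base
        return base - self.s.sum()             # minus log-determinant of the Jacobian

def decompose(spectrogram, flow, n_sources=3, steps=500, lam=1e-3):
    """Decompose a non-negative (n_frames, n_bins) spectrogram into n_sources atoms."""
    n_frames, n_bins = spectrogram.shape
    z = torch.randn(n_sources, n_bins, requires_grad=True)     # latent codes of atoms
    act = torch.rand(n_frames, n_sources, requires_grad=True)  # frame-wise activations
    opt = torch.optim.Adam([z, act], lr=1e-2)
    for _ in range(steps):
        atoms = torch.relu(flow(z))                    # decoded dictionary elements
        recon = torch.relu(act) @ atoms                # linear, NMF-style mixing
        loss = ((recon - spectrogram) ** 2).mean() \
               - lam * flow.log_prob_of_decoded(z).mean()  # density-guided prior
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.relu(act).detach(), torch.relu(flow(z)).detach()
```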
We introduce a novel way to incorporate prior information into (semi-) supervised non-negative matrix factorization, which we call differentiable dictionary search. It enables general, highly flexible and principled modelling of mixtures where non-linear sources are linearly mixed. We study its behavior on an audio decomposition task, and conduct an extensive, highly controlled study of its modelling capabilities.
Audio Spectrogram Transformer models rule the field of Audio Tagging, outperforming the previously dominant Convolutional Neural Networks (CNNs). Their superiority is based on their ability to scale up and exploit large-scale datasets such as AudioSet. However, Transformers are demanding in terms of model size and computational requirements compared to CNNs. We propose a training procedure for efficient CNNs based on offline Knowledge Distillation (KD) from high-performing yet complex transformers. The proposed training schema and the efficient CNN design based on MobileNetV3 result in models that outperform previous solutions in terms of parameter and computational efficiency as well as prediction performance. We provide models of different complexity levels, scaling from low-complexity models up to a new state-of-the-art performance of .483 mAP on AudioSet. Source code available at: https://github.com/fschmid56/EfficientAT
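A minimal sketch of the offline distillation objective suggested by this description (an illustrative loss, not necessarily the exact recipe in the EfficientAT repository): teacher logits are precomputed and stored, and the MobileNetV3 student is trained on a weighted sum of a label loss and a distillation loss against the teacher's soft outputs, both binary because AudioSet is a multi-label task.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, lam=0.1):
    """Multi-label distillation: BCE to the labels plus BCE to the teacher's soft targets."""
    label_loss = F.binary_cross_entropy_with_logits(student_logits, targets)
    distill_loss = F.binary_cross_entropy_with_logits(
        student_logits, torch.sigmoid(teacher_logits))  # soft teacher targets
    return lam * label_loss + (1.0 - lam) * distill_loss

def train_step(student, optimizer, batch):
    """One step, assuming `batch` carries precomputed (offline) teacher logits."""
    spectrogram, targets, teacher_logits = batch
    optimizer.zero_grad()
    loss = kd_loss(student(spectrogram), teacher_logits, targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```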
Cadences are complex structures that have been driving music from the beginnings of contrapuntal polyphony until today. Detecting such structures is vital for many MIR tasks such as music analysis, key detection, or music segmentation. However, automatic cadence detection remains challenging, mainly because it involves a combination of high-level musical elements such as harmony, voice leading, and rhythm. In this work, we propose a graph representation of symbolic scores as an intermediate means of addressing the cadence detection task. We approach cadence detection as an imbalanced node classification problem using a graph convolutional network. We obtain results roughly comparable to the state of the art, and we present a model capable of making predictions at multiple levels of granularity, from individual notes to beats, thanks to the fine-grained annotations. Moreover, our experiments suggest that graph convolutions can learn non-local features that assist cadence detection, freeing us from the need to design specialized features that encode non-local context. We argue that this general approach to modelling musical scores and classification tasks has many potential advantages beyond the specific recognition task presented here.
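The sketch below illustrates cadence detection as node classification with a small graph convolutional network in plain PyTorch; the note features, graph construction, and exact architecture are assumptions for illustration, not the paper's configuration.

```python
import torch

class GCNLayer(torch.nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        # normalized adjacency (with self-loops) times node features, then a linear map
        return torch.relu(self.lin(adj_norm @ x))

class CadenceGCN(torch.nn.Module):
    """Two graph convolutions followed by a per-note classification head."""
    def __init__(self, n_features, hidden=64, n_classes=2):
        super().__init__()
        self.gc1 = GCNLayer(n_features, hidden)
        self.gc2 = GCNLayer(hidden, hidden)
        self.out = torch.nn.Linear(hidden, n_classes)

    def forward(self, x, adj_norm):
        return self.out(self.gc2(self.gc1(x, adj_norm), adj_norm))

def normalize_adjacency(adj):
    """D^{-1/2} (A + I) D^{-1/2} for a dense adjacency matrix of the score graph."""
    adj = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = adj.sum(1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)

# Class-weighted loss to counter the heavy imbalance (cadence notes are rare), e.g.:
# loss = torch.nn.functional.cross_entropy(logits, labels, weight=class_weights)
```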
Current methods for explaining deep learning systems applied to music data operate in low-level feature spaces, e.g., by highlighting potentially relevant time-frequency bins in a spectrogram or time steps in a piano roll. This can be difficult to understand, particularly for musicologists without technical knowledge. To address this, we focus on more human-friendly explanations based on high-level musical concepts. Our research targets trained systems (post-hoc explanations) and explores two approaches: a supervised one, in which the user can define a musical concept and test whether it is relevant to the system; and an unsupervised one, in which musical excerpts containing relevant concepts are automatically selected and given to the user for interpretation. We demonstrate both techniques on an existing symbolic composer classification system, showcasing their potential and highlighting their intrinsic limitations.
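A minimal sketch of the supervised, concept-based direction described above, in the spirit of TCAV-style probing: a linear probe is fit on the classifier's internal activations to test whether a user-defined musical concept is encoded. The `get_activations` hook into the trained composer classifier is a hypothetical placeholder.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def concept_relevance(get_activations, concept_excerpts, random_excerpts):
    """Cross-validated probe accuracy; values near 0.5 suggest the concept is not encoded."""
    X = np.vstack([get_activations(e) for e in concept_excerpts + random_excerpts])
    y = np.array([1] * len(concept_excerpts) + [0] * len(random_excerpts))
    probe = LogisticRegression(max_iter=1000)   # linear probe on internal activations
    return cross_val_score(probe, X, y, cv=5).mean()
```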
The lack of large labeled datasets remains a significant challenge in many application areas of deep learning. Researchers and practitioners typically resort to transfer learning and data augmentation to alleviate this issue. We study these strategies in the context of audio retrieval with natural-language queries (Task 6B of the DCASE 2022 Challenge). Our proposed system uses pre-trained embedding models to project recordings and textual descriptions into a shared audio-caption space in which related examples from different modalities are close. We employ various data augmentation techniques on audio and text inputs and systematically tune their corresponding hyperparameters with sequential model-based optimization. Our results show that the augmentation strategies used reduce overfitting and improve retrieval performance. We further show that pre-training the system on the AudioCaps dataset leads to additional improvements.
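The shared audio-caption space can be illustrated with the kind of symmetric contrastive objective commonly used for cross-modal retrieval; this is a sketch under that assumption, not necessarily the system's exact loss, and the audio and text encoders are left as placeholders.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(audio_emb, text_emb, temperature=0.07):
    """audio_emb, text_emb: (batch, dim); row i of each is a matching audio-caption pair."""
    audio_emb = F.normalize(audio_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = audio_emb @ text_emb.t() / temperature   # pairwise similarities
    targets = torch.arange(audio_emb.size(0))
    # symmetric: audio-to-text and text-to-audio retrieval directions
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```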
Standard machine learning models for tagging and classifying acoustic signals cannot handle classes that were unseen during training. Zero-Shot (ZS) learning overcomes this limitation by predicting classes based on adaptable class descriptions. This study aims to investigate the effectiveness of self-attention-based audio embedding architectures for ZS learning. To this end, we compare the recent patchout spectrogram transformer against two classic convolutional architectures. We evaluate these three architectures on three tasks and three different benchmark datasets: general-purpose tagging on AudioSet, environmental sound classification on ESC-50, and instrument tagging on OpenMIC. Our results show that the self-attention-based embedding methods outperform both convolutional architectures in all of these settings. By designing training and test data accordingly, we observe that prediction performance suffers significantly when the "semantic distance" between training and new test classes is large, an effect that merits more detailed investigation.
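A minimal sketch of the zero-shot prediction step implied above: an audio embedding (e.g., from a spectrogram transformer) is compared against embeddings of the textual class descriptions, so classes unseen during training can be handled simply by adding their descriptions. The projection into a joint space and the embedding models themselves are omitted here and assumed given.

```python
import torch
import torch.nn.functional as F

def zero_shot_predict(audio_embedding, class_embeddings):
    """audio_embedding: (dim,); class_embeddings: (n_classes, dim) built from class descriptions."""
    scores = F.cosine_similarity(audio_embedding.unsqueeze(0), class_embeddings)
    return scores.argmax().item(), scores   # predicted class index and all similarities
```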
We present a prototype of an automatic page-turning system that works directly on real scores, i.e., sheet images, without any symbolic representation. Our system is based on a multi-modal neural network architecture that observes a complete sheet image page as input, listens to an incoming musical performance, and predicts the corresponding position in the image. Using the position estimates of our system, we employ a simple heuristic that triggers a page-turning event once a certain location within the sheet image is reached. As a proof of concept, we further combine our system with an actual machine that physically turns the page upon command.
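A tiny sketch of the kind of triggering heuristic described above; the threshold value and the `turner.turn_page()` call are hypothetical placeholders for the actual system and machine interface.

```python
class PageTurnTrigger:
    """Fire a single page-turn once the tracked position passes a point on the page."""
    def __init__(self, page_height, threshold=0.9):
        self.limit = threshold * page_height   # e.g., 90% down the current page
        self.turned = False

    def update(self, predicted_y, turner):
        if not self.turned and predicted_y >= self.limit:
            turner.turn_page()     # command to the physical page-turning machine
            self.turned = True

    def reset(self):               # call once the new page is displayed
        self.turned = False
```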
Sentiment analysis is often a crowdsourcing task prone to subjective labels given by many annotators. It is not yet fully understood how the annotation bias of each annotator can be modeled correctly with state-of-the-art methods. However, resolving annotation bias precisely and reliably is key to understanding annotators' labeling behavior and to successfully addressing corresponding individual misconceptions and wrongdoings regarding the annotation task. Our contribution is an explanation and improvement of precise neural end-to-end bias modelling and ground-truth estimation, which reduces an undesired mismatch present in the existing state of the art. Classification experiments show that it has the potential to improve accuracy in cases where each sample is annotated by only a single annotator. We make the entire source code publicly available and release our own domain-specific sentiment dataset containing 10,000 sentences discussing organic food products. These were crawled from social media and individually labeled by 10 non-expert annotators.
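A minimal sketch of neural end-to-end annotator-bias modelling in the spirit of the description above, using a learned per-annotator "confusion" layer on top of a shared classifier; this is an illustrative variant, not necessarily the paper's exact architecture. The shared network estimates the latent ground truth, and one transition matrix per annotator maps it to that annotator's observed labels.

```python
import torch

class BiasAwareClassifier(torch.nn.Module):
    def __init__(self, encoder, n_classes, n_annotators):
        super().__init__()
        self.encoder = encoder                               # any text encoder -> logits
        self.bias = torch.nn.Parameter(
            torch.eye(n_classes).repeat(n_annotators, 1, 1))  # init: diagonal-dominant

    def forward(self, x, annotator_ids):
        truth = torch.softmax(self.encoder(x), dim=-1)        # estimated ground truth
        transition = torch.softmax(self.bias[annotator_ids], dim=-1)  # per-annotator bias
        observed = torch.bmm(truth.unsqueeze(1), transition).squeeze(1)
        # train with the negative log-likelihood of `observed` against the given labels
        return truth, observed
```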
Data-centric artificial intelligence (data-centric AI) represents an emerging paradigm emphasizing that the systematic design and engineering of data is essential for building effective and efficient AI-based systems. The objective of this article is to introduce practitioners and researchers from the field of Information Systems (IS) to data-centric AI. We define relevant terms, provide key characteristics to contrast the data-centric paradigm to the model-centric one, and introduce a framework for data-centric AI. We distinguish data-centric AI from related concepts and discuss its longer-term implications for the IS community.